10 results for CHD Prediction, Blood Serum Data Chemometrics Methods

in the Bulgarian Digital Mathematics Library at IMI-BAS


Relevance:

100.00%

Publisher:

Abstract:

AMS Subject Classification: 62P10, 62H30, 68T01

Relevance:

100.00%

Publisher:

Abstract:

The purpose of the optimal valid partitioning (OVP) methods discussed here is to uncover the effect of ordinal or continuous explanatory variables on outcome variables of various types. The OVP approach searches for partitions of the explanatory-variable space that best separate observations with different outcome levels. Partitions of single-variable ranges, or of two-dimensional admissible areas for pairs of variables, are searched within corresponding families. The statistical validity of the revealed regularities is estimated with a permutation test that repeats the search for the optimal partition on each permuted dataset. A method for selecting output regularities is discussed, based on evaluating validity with two types of permutation tests.
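
As a rough illustration of the validation step, here is a minimal Python sketch of a permutation test for an optimal single-variable partition. The separation score, the threshold search, and all names are illustrative assumptions, not the paper's definitions.

```python
# A minimal sketch of permutation-based validation of an optimal
# single-variable partition, assuming a binary outcome and one
# continuous explanatory variable (all illustrative, not the paper's).
import numpy as np

def best_split_score(x, y):
    """Search all threshold partitions of x and return the best
    separation score (here: absolute difference in outcome means)."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = 0.0
    for i in range(1, len(xs)):
        left, right = ys[:i], ys[i:]
        best = max(best, abs(left.mean() - right.mean()))
    return best

def permutation_p_value(x, y, n_perm=1000, rng=None):
    """Repeat the optimal-partition search on permuted outcomes to
    estimate how often chance alone matches the observed score."""
    rng = rng or np.random.default_rng(0)
    observed = best_split_score(x, y)
    hits = sum(
        best_split_score(x, rng.permutation(y)) >= observed
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)

x = np.random.default_rng(42).normal(size=60)
y = (x > 0.3).astype(float)   # outcome level depends on x
print(permutation_p_value(x, y, n_perm=200))
```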

Relevance:

100.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 62F15.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the results of our data mining study of Pb-Zn (lead-zinc) ore assay records from a mining enterprise in Bulgaria. We examined the dataset, removed outliers, visualized the data, and compiled dataset statistics. A Pb-Zn cluster data mining model was created for segmentation and prediction of Pb-Zn ore assay data. The model consists of five clusters and a set of DMX queries. We analyzed the cluster content, size, structure, and characteristics. The DMX queries allow browsing and managing the clusters, as well as predicting ore assay records. The Pb-Zn cluster data mining model was tested and validated to demonstrate reasonable accuracy before being used in a production environment. The model can be used to adjust the mine's grinding and flotation processing parameters in near real time, which is important for the efficiency of the Pb-Zn ore beneficiation process. ACM Computing Classification System (1998): H.2.8, H.3.3.
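
The paper's model is built with DMX (SQL Server Data Mining Extensions) queries; as a stand-in, the following minimal Python sketch shows the same segment-and-predict idea with scikit-learn's KMeans. The assay values and column meanings are hypothetical; only the five-cluster choice follows the abstract.

```python
# A hypothetical sketch of clustering ore assay records and predicting
# the cluster of a new record, in place of the paper's DMX-based model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical assay records: Pb and Zn grades in percent.
assays = np.array([
    [1.2, 3.4], [0.8, 2.9], [4.1, 7.2], [3.8, 6.9],
    [0.3, 0.9], [0.4, 1.1], [2.0, 4.5], [2.2, 4.8],
])

scaler = StandardScaler().fit(assays)
model = KMeans(n_clusters=5, n_init=10, random_state=0).fit(
    scaler.transform(assays))

print(model.labels_)                                   # segment existing records
print(model.predict(scaler.transform([[1.0, 3.0]])))   # predict a new assay
```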

Relevance:

100.00%

Publisher:

Abstract:

The real purpose of collecting big data is to identify causality, in the hope that this will facilitate credible predictivity. But the search for causality can trap one in infinite regress, and thus one takes refuge in seeking associations between variables in data sets. Regrettably, mere knowledge of associations does not enable predictivity. Associations need to be embedded within the framework of the probability calculus to make coherent predictions. This is so because associations are a feature of probability models, and hence they do not exist outside the framework of a model. Measures of association, like correlation, regression, and mutual information, merely refute a preconceived model. Estimated measures of association do not lead to a probability model; a model is the product of pure thought. This paper discusses these and other fundamentals that are germane to seeking associations in particular, and machine learning in general. ACM Computing Classification System (1998): H.1.2, H.2.4, G.3.
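
To make the distinction concrete, here is a minimal Python sketch with simulated data (not from the paper): the correlation coefficient by itself predicts nothing, but once embedded in an assumed bivariate Gaussian model it yields a predictive mean and spread.

```python
# A minimal sketch: a correlation coefficient alone is mute, but under
# a bivariate Gaussian model (an assumption, not the paper's) the same
# r gives a coherent predictive distribution. Data are illustrative.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 200)
y = 0.7 * x + rng.normal(0, 0.5, 200)

r = np.corrcoef(x, y)[0, 1]   # an association measure, by itself mute

# Under the Gaussian model: E[y | x0] = mu_y + r * (sd_y / sd_x) * (x0 - mu_x)
x0 = 1.5
mean_pred = y.mean() + r * (y.std() / x.std()) * (x0 - x.mean())
sd_pred = y.std() * np.sqrt(1 - r**2)   # conditional spread under the model
print(f"r={r:.2f}, E[y|x0]={mean_pred:.2f} +/- {sd_pred:.2f}")
```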

Relevance:

100.00%

Publisher:

Abstract:

Different types of ontologies, and the knowledge or metaknowledge connected to them, are considered and analyzed with a view to their realization in contemporary information security systems (ISS), especially in intrusion detection systems (IDS) and intrusion prevention systems (IPS). The human-centered methods INCONSISTENCY, FUNNEL, CALEIDOSCOPE, and CROSSWORD are algorithmic or data-driven methods based on ontologies. All of them interact on the competitive principle of 'survival of the fittest' and are controlled by a Synthetic MetaMethod (SMM). It is shown that data analysis frequently needs an act of creation, especially when applied to knowledge-poor environments, and that human-centered methods are well suited to such resolutions, often relying on dynamic ontologies.
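
The abstract gives no algorithmic detail of the competition, but a minimal, hypothetical sketch of 'survival of the fittest' selection among the named methods under a meta-controller might look as follows; the detectors' behavior here is pure placeholder.

```python
# A hypothetical sketch of competitive method selection; the method
# names come from the abstract, everything else is an assumption.
import random

class Method:
    def __init__(self, name):
        self.name, self.fitness = name, 0.0

    def detect(self, event):
        # Placeholder detector: a real method would consult its ontology.
        return random.random() > 0.5

def synthetic_meta_method(methods, events, labels):
    """Score every method on labelled events and keep the fittest."""
    for m in methods:
        hits = sum(m.detect(e) == y for e, y in zip(events, labels))
        m.fitness = hits / len(events)
    return max(methods, key=lambda m: m.fitness)

methods = [Method(n) for n in
           ("INCONSISTENCY", "FUNNEL", "CALEIDOSCOPE", "CROSSWORD")]
events = list(range(100))
labels = [random.random() > 0.7 for _ in events]
best = synthetic_meta_method(methods, events, labels)
print(best.name, best.fitness)
```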

Relevance:

100.00%

Publisher:

Abstract:

This article describes approaches to automating testing and documentation in information systems with a graphical user interface. A combination of data mining methods and the theory of finite state machines is used for test automation. Automated creation of software documentation is based on the use of metadata in the documented system; the metadata is built on a graph model. The described approaches improve the performance and quality of the testing and documentation processes.
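
As a rough illustration of the finite-state-machine side, here is a minimal Python sketch that models GUI screens as states and user actions as transitions, then enumerates action sequences as candidate test cases; the model is illustrative, not the article's.

```python
# A minimal sketch of FSM-based GUI test generation under assumed
# screens and actions (hypothetical, not from the article).
from collections import deque

# Hypothetical GUI model: state -> {action: next_state}
fsm = {
    "login":     {"submit": "dashboard"},
    "dashboard": {"open_report": "report", "logout": "login"},
    "report":    {"back": "dashboard"},
}

def generate_test_paths(fsm, start, max_len=4):
    """Enumerate action sequences up to max_len via BFS; each path
    is a candidate automated test case."""
    paths, queue = [], deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if path:
            paths.append(path)
        if len(path) < max_len:
            for action, nxt in fsm[state].items():
                queue.append((nxt, path + [action]))
    return paths

for p in generate_test_paths(fsm, "login", max_len=3):
    print(" -> ".join(p))
```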

Relevance:

100.00%

Publisher:

Abstract:

* Supported by the Army Research Office under grant DAAD-19-02-10059.

Relevance:

100.00%

Publisher:

Abstract:

The “trial and error” method is fundamental for Master Mind decision algorithms. On the basis of Master Mind games and strategies, we consider some data mining methods for tests using students as teachers. Voting, twins, opposite, simulate, and observer methods are investigated. For a pure database, these combinatorial algorithms are faster than many AI and Master Mind methods. The complexities of these algorithms are compared with basic combinatorial methods in AI. ACM Computing Classification System (1998): F.3.2, G.2.1, H.2.1, H.2.8, I.2.6.
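
To illustrate the trial-and-error idea behind Master Mind solvers, here is a minimal Python sketch of a generic elimination strategy: guess, score, and discard inconsistent candidates. This is a textbook baseline, not the paper's voting, twins, opposite, simulate, or observer methods.

```python
# A minimal sketch of trial-and-error Master Mind solving by
# candidate elimination (a generic baseline, not the paper's methods).
from itertools import product

COLORS, PEGS = "ABCDEF", 4

def feedback(guess, secret):
    """Return (black, white) pegs for a guess against the secret."""
    black = sum(g == s for g, s in zip(guess, secret))
    common = sum(min(guess.count(c), secret.count(c)) for c in set(COLORS))
    return black, common - black

def solve(secret):
    candidates = [''.join(p) for p in product(COLORS, repeat=PEGS)]
    tries = 0
    while True:
        guess = candidates[0]          # trial ...
        tries += 1
        fb = feedback(guess, secret)
        if fb == (PEGS, 0):
            return guess, tries
        # ... and error: keep only candidates consistent with feedback.
        candidates = [c for c in candidates if feedback(guess, c) == fb]

print(solve("ABCD"))
```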

Relevance:

50.00%

Publisher:

Abstract:

This paper focuses on the development of methods and a cascade of models for flood monitoring and forecasting, and on their implementation in a Grid environment. Satellite data are processed with neural networks to map flood extent. For flood forecasting we use a cascade of models: a regional numerical weather prediction (NWP) model, a hydrological model, and a hydraulic model. The implementation of the developed methods and models in the Grid infrastructure and related projects is discussed.
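
As a rough illustration of the flood-extent mapping step, here is a minimal Python sketch that classifies satellite pixels as water or non-water with a small neural network; the two-feature input, the simulated data, and the tiny MLP are illustrative assumptions, not the paper's setup.

```python
# A hypothetical sketch of neural-network flood-extent mapping:
# per-pixel water / non-water classification on simulated features.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Hypothetical per-pixel features, e.g. two radar backscatter bands.
water = rng.normal([-15.0, -12.0], 1.5, size=(200, 2))
land = rng.normal([-8.0, -6.0], 1.5, size=(200, 2))
X = np.vstack([water, land])
y = np.array([1] * 200 + [0] * 200)   # 1 = flooded, 0 = dry

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                    random_state=0).fit(X, y)

# Label a small "scene" of new pixels; the 1-pixels form the flood mask.
scene = rng.normal([-11.0, -9.0], 3.0, size=(5, 2))
print(clf.predict(scene))
```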